CO tests become flaky on a cluster upgrade failure #30780
hongkailiu wants to merge 2 commits into openshift:main
Conversation
This pull skips all CO tests on SNO.

- `Available=False` and `Degraded=True` are not checked at all, regardless of whether the test case runs in an upgrade test suite. Previously these were handled as exceptions, which made the job flaky instead of failing, so the relevant exceptions can now be removed.
- All checks on the `Progressing` condition are likewise skipped on a SNO cluster.

The inherited logging logic is kept for the case where the control plane topology cannot be determined, because I am not sure on which types of clusters that error will show up.
We do not want to report testing results if the upgrade failed. However, the relevant COs might be the root cause of the upgrade failure. Instead of completely skipping the test cases, we make them flaky so they still bubble up some signal; the flake signal is weak enough that it does not cause a job failure. This strategy is already implemented for the `Available` and `Degraded` conditions, and we extend it to `Progressing` as well. There is another place that checks `Progressing` [1], which is out of the scope of `monitortest`, so determining a failing upgrade there is challenging. For now, let us ignore that case.

[1] https://github.com/openshift/origin/blob/50bae192e1195e41949279128562f5e861f35d72/test/extended/machines/scale.go#L269
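The fail-versus-flake strategy above can be sketched like this. It is an illustrative simplification, not the PR's actual implementation: `JUnitTestCase` is a local stand-in for origin's `junitapi.JUnitTestCase`, and it relies on the convention that a flake is reported by emitting both a failing and a passing test case under the same name.

```go
package main

import "fmt"

// JUnitTestCase is a minimal stand-in for origin's junit test case
// type (github.com/openshift/origin/pkg/test/ginkgo/junitapi).
type JUnitTestCase struct {
	Name          string
	FailureOutput string // non-empty means the case failed
}

// conditionResults turns a condition-check failure into either a hard
// failure or a flake, depending on whether the upgrade itself failed.
// Emitting both a failing and a passing case with the same name marks
// the test as flaky rather than failed, so only a weak signal bubbles
// up when the upgrade failure may itself be caused by the CO.
func conditionResults(name, failureMsg string, upgradeFailed bool) []JUnitTestCase {
	if failureMsg == "" {
		return []JUnitTestCase{{Name: name}} // clean pass
	}
	failed := JUnitTestCase{Name: name, FailureOutput: failureMsg}
	if !upgradeFailed {
		return []JUnitTestCase{failed} // real failure: fail the job
	}
	// Upgrade failed: pair the failure with a pass so the result is a
	// flake instead of a job failure.
	return []JUnitTestCase{failed, {Name: name}}
}

func main() {
	for _, tc := range conditionResults("clusteroperator/foo should not report Progressing", "Progressing=True for too long", true) {
		fmt.Printf("%s failed=%v\n", tc.Name, tc.FailureOutput != "")
	}
}
```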
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: hongkailiu.
@hongkailiu: The following test failed.
/hold
Will rebase after #30775 gets in.